TRAINING SAMPLE DIMENSION REDUCTION BASED ON ASSOCIATION RULES

Authors
Abstract


Similar articles

Dimension Reduction Techniques for Training Polynomial Networks

We propose two novel methods for reducing dimension in training polynomial networks. We consider the class of polynomial networks whose output is the weighted sum of a basis of monomials. Our first method for dimension reduction eliminates redundancy in the training process. Using an implicit matrix structure, we derive iterative methods that converge quickly. A second method for dimension redu...
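The snippet above only gestures at the approach, so here is a minimal, hypothetical Python/NumPy sketch of the general idea (not the authors' algorithm): a polynomial model whose output is a weighted sum of monomials, with redundancy in the monomial basis removed by a truncated SVD before the weights are fitted with an iterative least-squares solver. The monomial degree, truncation tolerance, and toy data are assumptions made for illustration.

```python
# Illustrative sketch only -- not the paper's algorithm.
# Polynomial "network": y_hat = sum_j w_j * m_j(x), where m_j are monomials.
# Redundancy in the monomial basis is removed with a truncated SVD, and the
# reduced least-squares problem is solved with an iterative method (LSQR).
import numpy as np
from itertools import combinations_with_replacement
from scipy.sparse.linalg import lsqr

def monomial_features(X, degree=3):
    """All monomials of the inputs up to the given total degree."""
    n, d = X.shape
    cols = [np.ones(n)]                           # degree-0 term
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] ** 2 - 2.0 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=500)

M = monomial_features(X, degree=3)                # highly redundant basis
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))                  # numerical rank = effective dimension
M_red = U[:, :k] * s[:k]                          # reduced training matrix

w_red = lsqr(M_red, y)[0]                         # iterative solve in the reduced space
w = Vt[:k].T @ w_red                              # map back to monomial weights
print(f"kept {k} of {M.shape[1]} basis directions,"
      f" train RMSE = {np.sqrt(np.mean((M @ w - y) ** 2)):.3f}")
```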


Process variation dimension reduction based on SVD

We propose an algorithm based on singular value decomposition (SVD) to reduce the number of process variation variables. With few process variation variables, fault simulation and timing analysis under process variation can be performed efficiently. Our algorithm reduces the number of process variation variables while preserving the delay function with respect to process variation. Compared wit...
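As a rough illustration of the idea (not the cited algorithm), the sketch below assumes path delays are approximately linear in the variation variables through a sensitivity matrix, applies an SVD to that matrix, and keeps a few singular directions as the reduced variation variables; the matrix sizes and rank are invented for the example.

```python
# Illustrative sketch only: reduce many process variation variables to a few
# via SVD of a delay-sensitivity matrix, assuming path delays are roughly
# linear in the variations:  delay ~ d0 + S @ v,  S of shape (n_paths, n_vars).
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_vars = 200, 50
S = rng.normal(size=(n_paths, 5)) @ rng.normal(size=(5, n_vars))  # low-rank by construction
S += 0.01 * rng.normal(size=S.shape)                              # small residual

U, sig, Vt = np.linalg.svd(S, full_matrices=False)
k = 5                                         # number of reduced variation variables
S_red = U[:, :k] * sig[:k]                    # reduced sensitivity matrix (n_paths x k)

# Any variation sample v maps to k reduced variables z = Vt[:k] @ v,
# and delays are approximated by d0 + S_red @ z.
v = rng.normal(size=n_vars)
z = Vt[:k] @ v
err = np.linalg.norm(S @ v - S_red @ z) / np.linalg.norm(S @ v)
print(f"relative delay error with {k} of {n_vars} variables: {err:.2e}")
```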


Dimension Reduction Based on Canonical Correlation

Dimension reduction is helpful and often necessary in exploring nonlinear or nonparametric regression structures with a large number of predictors. We consider using the canonical variables from the design space whose correlations with a spline basis in the response space are significant. The method can be viewed as a variant of sliced inverse regression (SIR) with simple slicing replaced by Bs...
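A minimal sketch of this kind of estimator, under the stated idea of replacing simple slicing with a spline basis of the response: build a cubic spline basis on y, run canonical correlation analysis between the predictors and that basis, and keep the directions whose canonical correlations are large. The truncated-power basis, the crude threshold standing in for a significance test, and the toy data are all illustrative choices, not the paper's.

```python
# Illustrative sketch only: a SIR-like estimator where slicing is replaced by a
# cubic spline basis of the response, and directions come from canonical
# correlation analysis between the predictors X and that basis.
import numpy as np

def spline_basis(y, n_knots=5):
    """Standardized cubic truncated-power spline basis of a scalar response."""
    knots = np.quantile(y, np.linspace(0.1, 0.9, n_knots))
    cols = [y, y ** 2, y ** 3] + [np.clip(y - t, 0, None) ** 3 for t in knots]
    B = np.column_stack(cols)
    return (B - B.mean(0)) / B.std(0)

def cca_directions(X, F):
    """Canonical directions in X-space and the canonical correlations."""
    Xc, Fc = X - X.mean(0), F - F.mean(0)
    Sxx = Xc.T @ Xc / len(X) + 1e-8 * np.eye(X.shape[1])
    Sff = Fc.T @ Fc / len(X) + 1e-8 * np.eye(F.shape[1])
    Sxf = Xc.T @ Fc / len(X)
    # Whiten both blocks with Cholesky factors, then SVD the cross-covariance.
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx)).T
    Wf = np.linalg.inv(np.linalg.cholesky(Sff)).T
    U, rho, _ = np.linalg.svd(Wx.T @ Sxf @ Wf, full_matrices=False)
    return Wx @ U, rho            # columns of Wx @ U are directions in X-space

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 8))
y = np.sin(X[:, 0]) + 0.5 * np.exp(X[:, 1]) + 0.1 * rng.normal(size=1000)

A, rho = cca_directions(X, spline_basis(y))
keep = rho > 0.3                  # crude threshold standing in for a significance test
print("canonical correlations:", np.round(rho, 2))
print("estimated reduction dimension:", int(keep.sum()))
```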


Dimension reduction based on extreme dependence

We introduce a dimension reduction technique based on extreme observations. The classical assumption of a linear model for the distribution of a random vector is replaced by the weaker assumption of a fairly general model for the copula. We assume an elliptical copula to describe the extreme dependence structure, which preserves a 'correlation-like' structure in the extremes. Based on the tail ...
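The following is only an illustrative sketch of the general idea, not the paper's estimator: keep the most extreme observations, estimate a 'correlation-like' matrix from them via Kendall's tau (for an elliptical copula, rho = sin(pi*tau/2)), and take its leading eigenvectors as directions of extreme dependence. The tail threshold, the number of retained directions, and the toy data are assumptions.

```python
# Illustrative sketch only: estimate a 'correlation-like' matrix from the most
# extreme observations using Kendall's tau (rho = sin(pi*tau/2) under an
# elliptical copula), then keep its leading eigenvectors as reduced directions.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
n, p = 5000, 6
Z = rng.standard_t(df=3, size=(n, 2))                 # heavy-tailed common factors
X = Z @ rng.normal(size=(2, p)) + 0.5 * rng.standard_t(df=3, size=(n, p))

# Keep the tail: observations whose norm exceeds a high empirical quantile.
norms = np.linalg.norm(X, axis=1)
tail = X[norms > np.quantile(norms, 0.95)]

# Correlation-like matrix of the tail via pairwise Kendall's tau.
R = np.eye(p)
for i in range(p):
    for j in range(i + 1, p):
        tau, _ = kendalltau(tail[:, i], tail[:, j])
        R[i, j] = R[j, i] = np.sin(np.pi * tau / 2.0)

eigval, eigvec = np.linalg.eigh(R)
k = 2
directions = eigvec[:, ::-1][:, :k]       # leading directions of extreme dependence
print("eigenvalues:", np.round(eigval[::-1], 2))
```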


Likelihood-based Sufficient Dimension Reduction

We obtain the maximum likelihood estimator of the central subspace under conditional normality of the predictors given the response. Analytically and in simulations we found that our new estimator can perform much better than sliced inverse regression, sliced average variance estimation and directional regression, and that it seems quite robust to deviations from normality.
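The likelihood-based estimator itself is not reproduced here; instead, below is a minimal sketch of the sliced inverse regression baseline it is compared against, with toy data, slice count, and target dimension chosen arbitrarily for illustration.

```python
# Illustrative sketch of the sliced inverse regression (SIR) baseline mentioned
# above, not of the likelihood-based estimator itself.
import numpy as np

def sir(X, y, n_slices=10, d=1):
    """Estimate a d-dimensional reduction subspace with SIR."""
    n, p = X.shape
    mu, Sigma = X.mean(0), np.cov(X, rowvar=False)
    # Standardize the predictors: z = L^{-1} (x - mu), with Sigma = L L^T.
    L = np.linalg.cholesky(Sigma)
    Z = np.linalg.solve(L, (X - mu).T).T
    # Slice on the response and average the standardized predictors per slice.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = sum(len(s) / n * np.outer(Z[s].mean(0), Z[s].mean(0)) for s in slices)
    # Leading eigenvectors of the slice-mean covariance, mapped back to X-scale.
    eigval, eigvec = np.linalg.eigh(M)
    B = np.linalg.solve(L.T, eigvec[:, ::-1][:, :d])
    return B / np.linalg.norm(B, axis=0), eigval[::-1]

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 6))
beta = np.array([1.0, -1.0, 0, 0, 0, 0]) / np.sqrt(2)
y = (X @ beta) ** 3 + 0.2 * rng.normal(size=2000)

B, eigval = sir(X, y, d=1)
print("estimated direction:", np.round(B[:, 0], 2))
print("|cos angle| to truth:", abs(float(B[:, 0] @ beta)))
```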



Journal

Journal title: Radio Electronics, Computer Science, Control

Year: 2014

ISSN: 2313-688X,1607-3274

DOI: 10.15588/1607-3274-2014-1-15